Review for NeurIPS paper: CircleGAN: Generative Adversarial Learning across Spherical Circles

Neural Information Processing Systems

Correctness: I like the ideas and concepts of 'diversity' and 'realness' on the sphere (obtained by simple L2-normalization), but it is non-trivial to say that the proposed objective function actually minimizes some 'distance' between the real and fake probability distributions. SphereGAN implements IPMs as its objective function and shows the equivalence between minimizing the Wasserstein distance on the hypersphere and minimizing its objective, but this kind of analysis is not provided for the proposed method, even though SphereGAN is the main baseline. Thus the authors need to clarify what is being minimized. The proposed method uses L2-normalization as the projection onto the hypersphere, which induces information loss because it is not one-to-one (all feature vectors lying on the same ray from the origin are projected to the same point on the hypersphere). In contrast, the stereographic projection is one-to-one and does not single out a fixed point, since the north pole (the 'center' in the paper) can be rotated transitively on the hypersphere.
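The reviewer's point about the two projections can be illustrated concretely. The sketch below (an illustration of the geometry, not code from either paper) shows that L2-normalization collapses every vector on the same ray to one point on the unit sphere, while the inverse stereographic projection, which maps R^n onto the unit sphere in R^(n+1) minus the north pole, keeps distinct inputs distinct:

```python
import numpy as np

def l2_project(v):
    # L2-normalization onto the unit hypersphere: many-to-one, since
    # every vector on the same ray from the origin maps to one point.
    return v / np.linalg.norm(v)

def stereographic_project(v):
    # Inverse stereographic projection of R^n onto the unit sphere in
    # R^(n+1) (minus the north pole); unlike L2-normalization, it is
    # one-to-one.
    s = np.dot(v, v)
    return np.append(2.0 * v, s - 1.0) / (s + 1.0)

v = np.array([3.0, 4.0])
# Two distinct points on the same ray collapse under L2-normalization...
assert np.allclose(l2_project(v), l2_project(2.0 * v))
# ...but remain distinct under the stereographic projection.
assert not np.allclose(stereographic_project(v), stereographic_project(2.0 * v))
```

Both maps land on the unit sphere, but only the stereographic one preserves the information the reviewer is concerned about losing.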


Review for NeurIPS paper: CircleGAN: Generative Adversarial Learning across Spherical Circles

Neural Information Processing Systems

This paper proposes a new GAN training technique based on intuition about the hypersphere. It attains state-of-the-art IS and FID scores on a few datasets. Reviewers were initially confused and concerned about its similarity to SphereGAN but were convinced it should be accepted after the rebuttal.


CircleGAN: Generative Adversarial Learning across Spherical Circles

Neural Information Processing Systems

We present a novel discriminator for GANs that improves the realness and diversity of generated samples by learning a structured hypersphere embedding space using spherical circles. The proposed discriminator learns to populate realistic samples around the longest spherical circle, i.e., a great circle, while pushing unrealistic samples toward the poles perpendicular to the great circle. Since longer circles occupy a larger area on the hypersphere, they encourage more diversity in representation learning, and vice versa. Discriminating samples based on their corresponding spherical circles can thus naturally induce diversity in generated samples. We also extend the proposed method to conditional settings with class labels by creating a hypersphere for each category and performing class-wise discrimination and update.
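The geometric intuition in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual objective: the pole vector `pole` and the scoring function `realness_score` are assumptions introduced here to show how distance from a great circle could rank samples.

```python
import numpy as np

def realness_score(features, pole):
    # Project discriminator features onto the unit hypersphere.
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = pole / np.linalg.norm(pole)
    # |<z, p>| is 0 for points on the great circle perpendicular to the
    # pole axis and 1 at the poles, so 1 - |<z, p>| is highest for
    # samples lying near the great circle (the "realistic" region).
    return 1.0 - np.abs(z @ p)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))   # hypothetical discriminator features
pole = rng.normal(size=8)         # hypothetical learned pole axis
scores = realness_score(feats, pole)
assert scores.shape == (4,)
assert np.all((scores >= 0.0) & (scores <= 1.0))
```

Under this toy scoring, pushing fake samples toward the poles drives their scores toward 0, while real samples clustered around the great circle score near 1, matching the paper's described geometry.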